I have a question about map/reduce task allocation. I am running a wordcount job on a cluster with 1 master and 2 slaves. How can I tell when slave 2 has finished? Is there a way to do something like a println to the console so that I get a notification when a slave finishes? And how can I find out how many slaves actually executed my code?
Well, I tried the following trick in the map function:
String ip = InetAddress.getLocalHost().getHostAddress();
I take that IP and emit it as the key (with a new IntWritable as the value), so the reducer receives the IP of whichever slave executed the map code and writes it to the output file. But this failed: the output contained the IP of slave 1 or of slave 2, never the addresses of both slaves.
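For reference, a minimal sketch of what such a mapper could look like, assuming the new `org.apache.hadoop.mapreduce` API (the class name is illustrative, not from the original code):

```java
import java.io.IOException;
import java.net.InetAddress;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

// Emits the IP of the node running this map task as the key, so the
// reducer can see which slaves actually executed map code.
public class IpTaggingMapper
        extends Mapper<LongWritable, Text, Text, IntWritable> {

    private static final IntWritable ONE = new IntWritable(1);
    private final Text ip = new Text();

    @Override
    protected void setup(Context context) throws IOException {
        // Resolve the local address once per task, not once per record.
        ip.set(InetAddress.getLocalHost().getHostAddress());
    }

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        context.write(ip, ONE);
    }
}
```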
I also use:

DistributedCache.addCacheFile(link, job.getConfiguration());

to distribute the whole file to all the slaves (rather than a split of the file to each slave).
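A sketch of that driver-side call in context, assuming the file is already on HDFS (the HDFS path below is illustrative):

```java
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.filecache.DistributedCache;
import org.apache.hadoop.mapreduce.Job;

public class WordCountDriver {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "wordcount");
        // Ship the entire file to every node; each task can then read it
        // locally via DistributedCache.getLocalCacheFiles().
        URI link = new URI("hdfs://master:54310/user/hduser/cache/words.txt");
        DistributedCache.addCacheFile(link, job.getConfiguration());
        // ... remaining job setup (mapper, reducer, input/output paths) ...
    }
}
```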
Finally, this is what I get in the NodeManager log:
2015-05-17 22:35:52,855 INFO SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for appattempt_1431892621101_0005_000001 (auth:SIMPLE)
2015-05-17 22:35:54,294 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl: Start request for container_1431892621101_0005_01_000002 by user hduser
2015-05-17 22:35:54,579 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl: Creating a new application reference for app application_1431892621101_0005
2015-05-17 22:35:54,707 INFO org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=hduser IP=192.168.1.11 OPERATION=Start Container Request TARGET=ContainerManageImpl RESULT=SUCCESS APPID=application_1431892621101_0005 CONTAINERID=container_1431892621101_0005_01_000002
2015-05-17 22:35:55,092 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Application application_1431892621101_0005 transitioned from NEW to INITING
2015-05-17 22:35:55,102 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Adding container_1431892621101_0005_01_000002 to application application_1431892621101_0005
2015-05-17 22:35:55,145 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Application application_1431892621101_0005 transitioned from INITING to RUNNING
2015-05-17 22:35:55,642 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1431892621101_0005_01_000002 transitioned from NEW to LOCALIZING
2015-05-17 22:35:55,658 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got event CONTAINER_INIT for appId application_1431892621101_0005
2015-05-17 22:35:55,663 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got event APPLICATION_INIT for appId application_1431892621101_0005
2015-05-17 22:35:55,663 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got APPLICATION_INIT for service mapreduce_shuffle
2015-05-17 22:35:55,759 INFO org.apache.hadoop.mapred.ShuffleHandler: Added token for job_1431892621101_0005
2015-05-17 22:35:55,909 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.LocalizedResource: Resource hdfs://master:54310/tmp/hadoop-yarn/staging/hduser/.staging/job_1431892621101_0005/job.jar transitioned from INIT to DOWNLOADING
2015-05-17 22:35:55,909 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.LocalizedResource: Resource hdfs://master:54310/tmp/hadoop-yarn/staging/hduser/.staging/job_1431892621101_0005/job.xml transitioned from INIT to DOWNLOADING
2015-05-17 22:35:56,085 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService: Created localizer for container_1431892621101_0005_01_000002
2015-05-17 22:35:56,734 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService: Writing credentials to the nmPrivate file /tmp/hadoop-hduser/nm-local-dir/nmPrivate/container_1431892621101_0005_01_000002.tokens. Credentials list:
2015-05-17 22:36:00,700 INFO org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Initializing user hduser
2015-05-17 22:36:01,816 INFO org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Copying from /tmp/hadoop-hduser/nm-local-dir/nmPrivate/container_1431892621101_0005_01_000002.tokens to /tmp/hadoop-hduser/nm-local-dir/usercache/hduser/appcache/application_1431892621101_0005/container_1431892621101_0005_01_000002.tokens
2015-05-17 22:36:01,865 INFO org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Localizer CWD set to /tmp/hadoop-hduser/nm-local-dir/usercache/hduser/appcache/application_1431892621101_0005 = file:/tmp/hadoop-hduser/nm-local-dir/usercache/hduser/appcache/application_1431892621101_0005
2015-05-17 22:36:22,696 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.LocalizedResource: Resource hdfs://master:54310/tmp/hadoop-yarn/staging/hduser/.staging/job_1431892621101_0005/job.jar(->/tmp/hadoop-hduser/nm-local-dir/usercache/hduser/appcache/application_1431892621101_0005/filecache/10/job.jar) transitioned from DOWNLOADING to LOCALIZED
2015-05-17 22:36:22,979 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.LocalizedResource: Resource hdfs://master:54310/tmp/hadoop-yarn/staging/hduser/.staging/job_1431892621101_0005/job.xml(->/tmp/hadoop-hduser/nm-local-dir/usercache/hduser/appcache/application_1431892621101_0005/filecache/11/job.xml) transitioned from DOWNLOADING to LOCALIZED
2015-05-17 22:36:22,979 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1431892621101_0005_01_000002 transitioned from LOCALIZING to LOCALIZED
2015-05-17 22:36:23,752 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1431892621101_0005_01_000002 transitioned from LOCALIZED to RUNNING
2015-05-17 22:36:23,902 INFO org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: launchContainer: [bash, /tmp/hadoop-hduser/nm-local-dir/usercache/hduser/appcache/application_1431892621101_0005/container_1431892621101_0005_01_000002/default_container_executor.sh]
2015-05-17 22:36:26,604 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Starting resource-monitoring for container_1431892621101_0005_01_000002
2015-05-17 22:36:29,051 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 7030 for container-id container_1431892621101_0005_01_000002: 28.3 MB of 1 GB physical memory used; 1.7 GB of 2.1 GB virtual memory used
2015-05-17 22:36:36,703 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 7030 for container-id container_1431892621101_0005_01_000002: 35.3 MB of 1 GB physical memory used; 1.7 GB of 2.1 GB virtual memory used
2015-05-17 22:36:42,008 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 7030 for container-id container_1431892621101_0005_01_000002: 66.4 MB of 1 GB physical memory used; 1.7 GB of 2.1 GB virtual memory used
2015-05-17 22:36:50,807 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 7030 for container-id container_1431892621101_0005_01_000002: 78.4 MB of 1 GB physical memory used; 1.8 GB of 2.1 GB virtual memory used
2015-05-17 22:36:56,039 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 7030 for container-id container_1431892621101_0005_01_000002: 87.3 MB of 1 GB physical memory used; 1.8 GB of 2.1 GB virtual memory used
2015-05-17 22:37:13,734 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 7030 for container-id container_1431892621101_0005_01_000002: 233.7 MB of 1 GB physical memory used; 1.8 GB of 2.1 GB virtual memory used
2015-05-17 22:37:17,816 INFO SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for appattempt_1431892621101_0005_000001 (auth:SIMPLE)
2015-05-17 22:37:18,024 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl: Stopping container with container Id: container_1431892621101_0005_01_000002
2015-05-17 22:37:18,094 INFO org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=hduser IP=192.168.1.11 OPERATION=Stop Container Request TARGET=ContainerManageImpl RESULT=SUCCESS APPID=application_1431892621101_0005 CONTAINERID=container_1431892621101_0005_01_000002
2015-05-17 22:37:18,130 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1431892621101_0005_01_000002 transitioned from RUNNING to KILLING
2015-05-17 22:37:18,149 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch: Cleaning up container container_1431892621101_0005_01_000002
2015-05-17 22:37:20,943 WARN org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Exit code from container container_1431892621101_0005_01_000002 is : 143
2015-05-17 22:37:21,116 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 7030 for container-id container_1431892621101_0005_01_000002: 0B of 1 GB physical memory used; 0B of 2.1 GB virtual memory used
2015-05-17 22:37:21,721 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1431892621101_0005_01_000002 transitioned from KILLING to CONTAINER_CLEANEDUP_AFTER_KILL
2015-05-17 22:37:21,749 INFO org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=hduser OPERATION=Container Finished - Killed TARGET=ContainerImpl RESULT=SUCCESS APPID=application_1431892621101_0005 CONTAINERID=container_1431892621101_0005_01_000002
2015-05-17 22:37:21,749 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1431892621101_0005_01_000002 transitioned from CONTAINER_CLEANEDUP_AFTER_KILL to DONE
2015-05-17 22:37:21,750 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Removing container_1431892621101_0005_01_000002 from application application_1431892621101_0005
2015-05-17 22:37:21,769 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got event CONTAINER_STOP for appId application_1431892621101_0005
2015-05-17 22:37:21,914 INFO org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Deleting absolute path : /tmp/hadoop-hduser/nm-local-dir/usercache/hduser/appcache/application_1431892621101_0005/container_1431892621101_0005_01_000002
2015-05-17 22:37:22,521 INFO SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for appattempt_1431892621101_0005_000001 (auth:SIMPLE)
2015-05-17 22:37:22,637 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl: Start request for container_1431892621101_0005_01_000003 by user hduser
2015-05-17 22:37:22,639 INFO org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=hduser IP=192.168.1.11 OPERATION=Start Container Request TARGET=ContainerManageImpl RESULT=SUCCESS APPID=application_1431892621101_0005 CONTAINERID=container_1431892621101_0005_01_000003
2015-05-17 22:37:22,639 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Adding container_1431892621101_0005_01_000003 to application application_1431892621101_0005
2015-05-17 22:37:22,708 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1431892621101_0005_01_000003 transitioned from NEW to LOCALIZING
2015-05-17 22:37:22,709 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got event CONTAINER_INIT for appId application_1431892621101_0005
2015-05-17 22:37:22,709 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got event APPLICATION_INIT for appId application_1431892621101_0005
2015-05-17 22:37:22,709 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got APPLICATION_INIT for service mapreduce_shuffle
2015-05-17 22:37:22,710 INFO org.apache.hadoop.mapred.ShuffleHandler: Added token for job_1431892621101_0005
2015-05-17 22:37:22,712 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1431892621101_0005_01_000003 transitioned from LOCALIZING to LOCALIZED
2015-05-17 22:37:23,245 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1431892621101_0005_01_000003 transitioned from LOCALIZED to RUNNING
2015-05-17 22:37:23,294 INFO org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: launchContainer: [bash, /tmp/hadoop-hduser/nm-local-dir/usercache/hduser/appcache/application_1431892621101_0005/container_1431892621101_0005_01_000003/default_container_executor.sh]
2015-05-17 22:37:24,119 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Starting resource-monitoring for container_1431892621101_0005_01_000003
2015-05-17 22:37:24,119 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Stopping resource-monitoring for container_1431892621101_0005_01_000002
2015-05-17 22:37:25,044 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Removed completed containers from NM context: [container_1431892621101_0005_01_000002]
2015-05-17 22:37:27,019 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 7093 for container-id container_1431892621101_0005_01_000003: 27.4 MB of 1 GB physical memory used; 1.7 GB of 2.1 GB virtual memory used
2015-05-17 22:37:32,749 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 7093 for container-id container_1431892621101_0005_01_000003: 43.8 MB of 1 GB physical memory used; 1.7 GB of 2.1 GB virtual memory used
2015-05-17 22:37:39,417 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 7093 for container-id container_1431892621101_0005_01_000003: 75 MB of 1 GB physical memory used; 1.8 GB of 2.1 GB virtual memory used
2015-05-17 22:37:46,295 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 7093 for container-id container_1431892621101_0005_01_000003: 104.5 MB of 1 GB physical memory used; 1.8 GB of 2.1 GB virtual memory used
2015-05-17 22:37:48,835 INFO org.apache.hadoop.mapred.ShuffleHandler: Setting connection close header...
2015-05-17 22:37:52,871 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 7093 for container-id container_1431892621101_0005_01_000003: 147.5 MB of 1 GB physical memory used; 1.8 GB of 2.1 GB virtual memory used
2015-05-17 22:37:56,163 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 7093 for container-id container_1431892621101_0005_01_000003: 151.8 MB of 1 GB physical memory used; 1.8 GB of 2.1 GB virtual memory used
2015-05-17 22:38:33,363 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 7093 for container-id container_1431892621101_0005_01_000003: 157.2 MB of 1 GB physical memory used; 1.8 GB of 2.1 GB virtual memory used
2015-05-17 22:38:40,907 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 7093 for container-id container_1431892621101_0005_01_000003: 157.1 MB of 1 GB physical memory used; 1.8 GB of 2.1 GB virtual memory used
2015-05-17 22:38:57,551 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 7093 for container-id container_1431892621101_0005_01_000003: 161.5 MB of 1 GB physical memory used; 1.8 GB of 2.1 GB virtual memory used
2015-05-17 22:39:02,314 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 7093 for container-id container_1431892621101_0005_01_000003: 164.0 MB of 1 GB physical memory used; 1.8 GB of 2.1 GB virtual memory used
2015-05-17 22:39:07,150 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 7093 for container-id container_1431892621101_0005_01_000003: 164.0 MB of 1 GB physical memory used; 1.8 GB of 2.1 GB virtual memory used
2015-05-17 22:39:10,542 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 7093 for container-id container_1431892621101_0005_01_000003: 164.0 MB of 1 GB physical memory used; 1.8 GB of 2.1 GB virtual memory used
2015-05-17 22:39:13,957 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 7093 for container-id container_1431892621101_0005_01_000003: 161.4 MB of 1 GB physical memory used; 1.8 GB of 2.1 GB virtual memory used
2015-05-17 22:39:16,564 INFO SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for appattempt_1431892621101_0005_000001 (auth:SIMPLE)
2015-05-17 22:39:16,794 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl: Stopping container with container Id: container_1431892621101_0005_01_000003
2015-05-17 22:39:16,795 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1431892621101_0005_01_000003 transitioned from RUNNING to KILLING
2015-05-17 22:39:16,795 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch: Cleaning up container container_1431892621101_0005_01_000003
2015-05-17 22:39:16,796 INFO org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=hduser IP=192.168.1.11 OPERATION=Stop Container Request TARGET=ContainerManageImpl RESULT=SUCCESS APPID=application_1431892621101_0005 CONTAINERID=container_1431892621101_0005_01_000003
2015-05-17 22:39:18,989 WARN org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Exit code from container container_1431892621101_0005_01_000003 is : 143
2015-05-17 22:39:19,847 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1431892621101_0005_01_000003 transitioned from KILLING to CONTAINER_CLEANEDUP_AFTER_KILL
2015-05-17 22:39:19,852 INFO org.apache.hadoop.yarn.server.nodemanager.NMAuditLogger: USER=hduser OPERATION=Container Finished - Killed TARGET=ContainerImpl RESULT=SUCCESS APPID=application_1431892621101_0005 CONTAINERID=container_1431892621101_0005_01_000003
2015-05-17 22:39:19,852 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1431892621101_0005_01_000003 transitioned from CONTAINER_CLEANEDUP_AFTER_KILL to DONE
2015-05-17 22:39:19,852 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Removing container_1431892621101_0005_01_000003 from application application_1431892621101_0005
2015-05-17 22:39:19,852 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got event CONTAINER_STOP for appId application_1431892621101_0005
2015-05-17 22:39:19,954 INFO org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Deleting absolute path : /tmp/hadoop-hduser/nm-local-dir/usercache/hduser/appcache/application_1431892621101_0005/container_1431892621101_0005_01_000003
2015-05-17 22:39:19,983 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 7093 for container-id container_1431892621101_0005_01_000003: 0B of 1 GB physical memory used; 0B of 2.1 GB virtual memory used
2015-05-17 22:39:22,984 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Stopping resource-monitoring for container_1431892621101_0005_01_000003
2015-05-17 22:39:29,850 INFO org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl: Removed completed containers from NM context: [container_1431892621101_0005_01_000003]
2015-05-17 22:39:30,206 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Application application_1431892621101_0005 transitioned from RUNNING to APPLICATION_RESOURCES_CLEANINGUP
2015-05-17 22:39:30,264 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices: Got event APPLICATION_STOP for appId application_1431892621101_0005
2015-05-17 22:39:30,272 INFO org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Deleting absolute path : /tmp/hadoop-hduser/nm-local-dir/usercache/hduser/appcache/application_1431892621101_0005
2015-05-17 22:39:30,289 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.application.Application: Application application_1431892621101_0005 transitioned from APPLICATION_RESOURCES_CLEANINGUP to FINISHED
2015-05-17 22:39:30,289 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.loghandler.NonAggregatingLogHandler: Scheduling Log Deletion for application: application_1431892621101_0005, with delay of 10800 seconds
I hoped this log would show, for example, the number of NodeManager instances that executed code for this job, but it did not. So: how can I find out the number of slaves that ran my code? Thanks in advance.
I am not sure whether you simply need to use log4j correctly, but if that is not the case, you should look at all the logs the slaves produce, because the output you are showing us here does not contain all the information present in those logs. And if that does not help, at least we can rule that option out and wait for another user to suggest an alternative. – chomp
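(For what it's worth, "using log4j correctly" inside a task could look like the sketch below; the class name and message are illustrative. Note that these messages end up in the per-container task logs under the NodeManager's userlogs directory, not on the console of the machine that submitted the job.)

```java
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

// Hadoop tasks log through commons-logging backed by log4j; output goes
// to the container's syslog file on the node that ran the task.
public class LoggingMapper
        extends Mapper<LongWritable, Text, Text, IntWritable> {

    private static final Log LOG = LogFactory.getLog(LoggingMapper.class);

    @Override
    protected void cleanup(Context context) {
        LOG.info("map task finished on this node");
    }
}
```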
I looked in the log files... yes, OK, I will edit my question and add the (node manager) log file... is it normal that the log shows the name node and the data node running at the same time as my code? Thanks in advance –