Hadoop namenode daemon silently stops after 5 seconds
I am relatively new to the wonders of Hadoop, but I am trying to manually build a cluster using the official Apache Hadoop documentation for Hadoop version 2.7.2. When I run the command:
$HADOOP_PREFIX/sbin/hadoop-daemon.sh start namenode
I am returned to my console with a message stating that the daemon is starting, along with the location of the .out file. When I open that file in vim, I am met with:

ulimit -a for user hadoop
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 15017
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 4096
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
Running the start namenode command again will give me the same message that the daemon is starting and the location of the log file (same location).
Here is what I get when I look at the .log file:
2016-02-03 16:03:04,092 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = namenode_dns_name/127.0.1.1
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 2.7.2
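For reference, this is roughly how I search a namenode .log for the shutdown reason. The sample file and its two lines below are illustrative placeholders, not actual Hadoop output; in practice I point the grep at the real log under $HADOOP_PREFIX/logs.

```shell
# Illustrative only: the grep surfaces FATAL/ERROR/Exception lines from a log.
# The sample log content here is a placeholder, not real Hadoop output.
cat > /tmp/sample-namenode.log <<'EOF'
2016-02-03 16:03:04,092 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
2016-02-03 16:03:09,000 ERROR some.hypothetical.Class: placeholder failure message
EOF
grep -E 'FATAL|ERROR|Exception' /tmp/sample-namenode.log
```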
What I know so far is that:
- Syntax errors in my configuration files will result in an error at the top of the log file
- My namenode daemon is silently crashing roughly 5 seconds after being started
- Improper settings in my configuration files can result in the namenode daemon silently crashing
- I do not have the jps command that I have seen mentioned in other questions
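On the missing jps: as far as I can tell it ships with the JDK rather than the JRE, so a JRE-only install lacks it. A quick check I use (the package name in the comment is distro-dependent and may not match your system):

```shell
# jps ships with the JDK (not the JRE), so a JRE-only install lacks it.
# Package names vary by distro (e.g. java-1.8.0-openjdk-devel on RHEL/CentOS).
if command -v jps >/dev/null 2>&1; then
  echo "jps found at $(command -v jps)"
else
  echo "jps not found - install a full JDK, or list Java processes with: ps aux | grep java"
fi
```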
I do not know whether this is an issue with my version of Java, but I have tried openjdk-1.8.0.65-3 and openjdk-1.7.0.95 (neither of which is listed on the official Hadoop Java Versions page at http://wiki.apache.org/hadoop/HadoopJavaVersions, though I hope I am not restricted to only the listed versions).
I also do not know whether it is an issue with my configuration files, but I will include them here for review (albeit with host names blocked out), and I can provide any other information needed for debugging. As an aside, I am attempting to run both the namenode daemon and the resourcemanager daemon on the same machine for testing purposes.
Thank you for your time.
hdfs-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->
<!-- Put site-specific property overrides in this file. -->
<configuration>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:///home/hadoop/hadoop-2.7.2/hdfs/namenode</value>
    <description>Path on the local filesystem where the NameNode stores the namespace and transaction logs persistently.</description>
  </property>
  <property>
    <name>dfs.hosts</name>
    <value>datanode_dns_name</value>
  </property>
  <property>
    <name>dfs.blocksize</name>
    <value>268435456</value>
  </property>
  <property>
    <name>dfs.namenode.handler.count</name>
    <value>100</value>
  </property>
</configuration>
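One thing I have been checking against dfs.namenode.name.dir above: the directory has to exist, be writable by the hadoop user, and be formatted once before the first start, or (as far as I understand) the NameNode exits shortly after startup. A sketch of the check; the /tmp path is a stand-in for the real directory:

```shell
# Stand-in path for the dfs.namenode.name.dir value above; the real directory
# is /home/hadoop/hadoop-2.7.2/hdfs/namenode and must be writable by hadoop.
NAME_DIR=/tmp/hdfs-demo/namenode
mkdir -p "$NAME_DIR"
ls -ld "$NAME_DIR"
# First run only - formatting erases any existing HDFS metadata:
# $HADOOP_PREFIX/bin/hdfs namenode -format
```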
core-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->
<!-- Put site-specific property overrides in this file. -->
<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/app/hadoop/tmp</value>
  </property>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://namenode_dns_name</value>
    <description>NameNode URI</description>
  </property>
  <property>
    <name>io.file.buffer.size</name>
    <value>131072</value>
  </property>
</configuration>
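For what it's worth, I have usually seen fs.defaultFS written with an explicit RPC port (8020 seems to be the conventional NameNode default in 2.x); I am not sure whether omitting it matters here:

```xml
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://namenode_dns_name:8020</value>
</property>
```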
yarn-site.xml
<?xml version="1.0"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->
<configuration>
  <property>
    <name>yarn.acl.enable</name>
    <value>false</value>
  </property>
  <property>
    <name>yarn.admin.acl</name>
    <value> </value>
  </property>
  <property>
    <name>yarn.log-aggregation-enable</name>
    <value>false</value>
  </property>
  <property>
    <name>yarn.resourcemanager.host</name>
    <value>namenode_dns_name</value>
  </property>
  <property>
    <name>yarn.scheduler.minimum-allocation-mb</name>
    <value>128</value>
    <description>Minimum limit of memory to allocate to each container request at the Resource Manager.</description>
  </property>
  <property>
    <name>yarn.scheduler.maximum-allocation-mb</name>
    <value>2048</value>
    <description>Maximum limit of memory to allocate to each container request at the Resource Manager.</description>
  </property>
  <property>
    <name>yarn.scheduler.minimum-allocation-vcores</name>
    <value>1</value>
    <description>The minimum allocation for every container request at the RM, in terms of virtual CPU cores. Requests lower than this won't take effect, and the specified value will get allocated the minimum.</description>
  </property>
  <property>
    <name>yarn.scheduler.maximum-allocation-vcores</name>
    <value>2</value>
    <description>The maximum allocation for every container request at the RM, in terms of virtual CPU cores. Requests higher than this won't take effect, and will get capped to this value.</description>
  </property>
  <property>
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>4096</value>
    <description>Physical memory, in MB, to be made available to running containers</description>
  </property>
  <property>
    <name>yarn.nodemanager.resource.cpu-vcores</name>
    <value>4</value>
    <description>Number of CPU cores that can be allocated for containers.</description>
  </property>
</configuration>
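One thing I am unsure about in the file above: I could not find yarn.resourcemanager.host among the stock YARN properties; the documented key appears to be yarn.resourcemanager.hostname, which would look like:

```xml
<property>
  <name>yarn.resourcemanager.hostname</name>
  <value>namenode_dns_name</value>
</property>
```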
Source: https://stackoverflow.com/questions/35281444
Best answer
You can use the Exclude conditional type to exclude types from a union:

function isNotFish(pet: Fish | Bird): pet is Exclude<typeof pet, Fish> {
  return pet.swim === undefined;
}
Or a more generic version:

function isNotFishG<T>(pet: T): pet is Exclude<typeof pet, Fish> {
  return pet.swim === undefined;
}

interface Fish { swim: boolean }
interface Bird { crow: boolean }

let p: Fish | Bird;
if (isNotFishG(p)) {
  p.crow
}
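A caveat on the generic version: with a fully unconstrained T, the property access pet.swim does not type-check. A sketch of a variant that compiles cleanly, using the in operator instead of the undefined comparison (isNotFishC is my name for it, not from the original answer):

```typescript
interface Fish { swim: boolean }
interface Bird { crow: boolean }

// Constraining T to object makes the 'in' check legal; Exclude<T, Fish>
// then narrows a Fish | Bird argument down to Bird in the true branch.
function isNotFishC<T extends object>(pet: T): pet is Exclude<T, Fish> {
  return !('swim' in pet);
}

const pet: Fish | Bird = { crow: true };
if (isNotFishC(pet)) {
  console.log(pet.crow); // pet is narrowed to Bird here
}
```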