"no namenode to stop" after replacing the mapred/hdfs/common jars with builds from Hadoop SVN
I checked out the source code from
http://svn.apache.org/repos/asf/hadoop/common
http://svn.apache.org/repos/asf/hadoop/hdfs
http://svn.apache.org/repos/asf/hadoop/mapreduce
and built hadoop-mapred-0.23.0-SNAPSHOT.jar, hadoop-hdfs-0.23.0-SNAPSHOT.jar, and hadoop-common-0.23.0-SNAPSHOT.jar,
but start-all.sh fails with these jars...
The jobtracker and tasktracker start, run for about 5 seconds, and then shut down automatically...
Can anyone help?
I checked the logs. The tasktracker log says:

2011-03-01 00:43:06,242 ERROR org.apache.hadoop.io.nativeio.NativeIO: Unable to initialize NativeIO libraries
java.lang.UnsatisfiedLinkError: org.apache.hadoop.io.nativeio.NativeIO.initNative()V
    at org.apache.hadoop.io.nativeio.NativeIO.initNative(Native Method)
    at org.apache.hadoop.io.nativeio.NativeIO.(NativeIO.java:55)
    at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:558)
    at org.apache.hadoop.fs.FilterFileSystem.setPermission(FilterFileSystem.java:352)
    at org.apache.hadoop.mapred.TaskController.setup(TaskController.java:90)
    at org.apache.hadoop.mapred.TaskTracker.initialize(TaskTracker.java:698)
    at org.apache.hadoop.mapred.TaskTracker.(TaskTracker.java:1391)
    at org.apache.hadoop.mapred.TaskTracker.main(TaskTracker.java:3619)

2011-03-01 00:43:12,983 ERROR org.apache.hadoop.mapred.TaskTracker: Can not start task tracker because java.io.IOException: Call to localhost/127.0.0.1:9001 failed on local exception: java.io.IOException: Connection reset by peer
    at org.apache.hadoop.ipc.Client.wrapException(Client.java:1063)
    at org.apache.hadoop.ipc.Client.call(Client.java:1031)
    at org.apache.hadoop.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:197)
    at org.apache.hadoop.mapred.$Proxy4.getProtocolSignature(Unknown Source)
    at org.apache.hadoop.ipc.WritableRpcEngine.getProxy(WritableRpcEngine.java:238)
    at org.apache.hadoop.ipc.RPC.getProtocolProxy(RPC.java:422)
    at org.apache.hadoop.ipc.RPC.waitForProtocolProxy(RPC.java:278)
    at org.apache.hadoop.ipc.RPC.waitForProtocolProxy(RPC.java:232)
    at org.apache.hadoop.ipc.RPC.waitForProtocolProxy(RPC.java:194)
    at org.apache.hadoop.ipc.RPC.waitForProxy(RPC.java:176)
    at org.apache.hadoop.mapred.TaskTracker$2.run(TaskTracker.java:710)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:416)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1142)
    at org.apache.hadoop.mapred.TaskTracker.initialize(TaskTracker.java:706)
    at org.apache.hadoop.mapred.TaskTracker.(TaskTracker.java:1391)
    at org.apache.hadoop.mapred.TaskTracker.main(TaskTracker.java:3619)
Caused by: java.io.IOException: Connection reset by peer
    at sun.nio.ch.FileDispatcher.read0(Native Method)
    at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
    at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:251)
    at sun.nio.ch.IOUtil.read(IOUtil.java:224)
    at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:254)
    at org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:59)
    at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:159)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:132)
    at java.io.FilterInputStream.read(FilterInputStream.java:133)
    at org.apache.hadoop.ipc.Client$Connection$PingInputStream.read(Client.java:368)
    at java.io.BufferedInputStream.fill(BufferedInputStream.java:235)
    at java.io.BufferedInputStream.read(BufferedInputStream.java:254)
    at java.io.DataInputStream.readInt(DataInputStream.java:387)
    at org.apache.hadoop.ipc.Client$Connection.receiveResponse(Client.java:760)
    at org.apache.hadoop.ipc.Client$Connection.run(Client.java:698)
2011-03-01 00:43:12,984 INFO org.apache.hadoop.mapred.TaskTracker: SHUTDOWN_MSG:
/********************************** SHUTDOWN_MSG: Shutting down TaskTracker at Vaio-sz65/127.0.1.1 **********************************/
Original: https://stackoverflow.com/questions/5144780
Accepted answer
You can use multiple keys and expressions in the order by:

order by (ytd > 0) desc, -- put positive numbers first
         ytd asc
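As a quick illustration of the technique above, here is a minimal sketch using SQLite through Python; the table `t` and column `ytd` are made-up names for the demo, since the question's actual schema is not shown:

```python
import sqlite3

# Hypothetical table with a year-to-date (ytd) column; names are illustrative.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (ytd REAL)")
conn.executemany("INSERT INTO t VALUES (?)", [(-5,), (3,), (0,), (7,), (-1,)])

# (ytd > 0) evaluates to 1 for positives and 0 otherwise, so sorting that
# expression DESC groups the positive rows first; ytd ASC then orders the
# rows within each group.
rows = [r[0] for r in conn.execute(
    "SELECT ytd FROM t ORDER BY (ytd > 0) DESC, ytd ASC")]
print(rows)  # [3.0, 7.0, -5.0, -1.0, 0.0]
```

The same boolean-expression trick works in MySQL, which the surrounding answers target; only the demo harness here is SQLite.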