hadoop MapReduce exception with multiple nodes

I include a self-contained sequence of commands that leads to the issue. I have a work-in-progress cluster configuration. It is also worth noting that this same example worked fine when I didn't have all the resource and history managers configured in yarn-site.xml and mapred-site.xml.

The problem, Cannot create directory /user/deploy/QuasiMonteCarlo_1391523248477_997612342/in, appears to be a wrong file path prefix somewhere, because the relevant user directories are:

  • /home/deploy/
  • /home/deploy/hdfs
  • /home/deploy/hdfs/name
  • /home/deploy/hdfs/data

So how come it tries to access /user/deploy?

deploy@olympus:~$ start-all.sh 
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
Starting namenodes on [olympus]
olympus: starting namenode, logging to /opt/dev/hadoop/2.2.0/logs/hadoop-deploy-namenode-olympus.out
hera: starting datanode, logging to /opt/dev/hadoop/2.2.0/logs/hadoop-deploy-datanode-hera.out
olympus: starting datanode, logging to /opt/dev/hadoop/2.2.0/logs/hadoop-deploy-datanode-olympus.out
zeus: starting datanode, logging to /opt/dev/hadoop/2.2.0/logs/hadoop-deploy-datanode-zeus.out
poseidon: starting datanode, logging to /opt/dev/hadoop/2.2.0/logs/hadoop-deploy-datanode-poseidon.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /opt/dev/hadoop/2.2.0/logs/hadoop-deploy-secondarynamenode-olympus.out
starting yarn daemons
starting resourcemanager, logging to /opt/dev/hadoop/2.2.0/logs/yarn-deploy-resourcemanager-olympus.out
olympus: starting nodemanager, logging to /opt/dev/hadoop/2.2.0/logs/yarn-deploy-nodemanager-olympus.out
zeus: starting nodemanager, logging to /opt/dev/hadoop/2.2.0/logs/yarn-deploy-nodemanager-zeus.out
hera: starting nodemanager, logging to /opt/dev/hadoop/2.2.0/logs/yarn-deploy-nodemanager-hera.out
poseidon: starting nodemanager, logging to /opt/dev/hadoop/2.2.0/logs/yarn-deploy-nodemanager-poseidon.out
deploy@olympus:~$ hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.2.0.jar pi 2 5
Number of Maps  = 2
Samples per Map = 5
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.SafeModeException): Cannot create directory /user/deploy/QuasiMonteCarlo_1391523248477_997612342/in. Name node is in safe mode.
The reported blocks 5 has reached the threshold 0.9990 of total blocks 5. The number of live datanodes 4 has reached the minimum number 0. Safe mode will be turned off automatically in 4 seconds.
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInt(FSNamesystem.java:3355)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3330)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:724)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:502)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:59598)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2048)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2044)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2042)

    at org.apache.hadoop.ipc.Client.call(Client.java:1347)
    at org.apache.hadoop.ipc.Client.call(Client.java:1300)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
    at com.sun.proxy.$Proxy9.mkdirs(Unknown Source)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
    at com.sun.proxy.$Proxy9.mkdirs(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.mkdirs(ClientNamenodeProtocolTranslatorPB.java:467)
    at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:2394)
    at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:2365)
    at org.apache.hadoop.hdfs.DistributedFileSystem$16.doCall(DistributedFileSystem.java:817)
    at org.apache.hadoop.hdfs.DistributedFileSystem$16.doCall(DistributedFileSystem.java:813)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirsInternal(DistributedFileSystem.java:813)
    at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:806)
    at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:1933)
    at org.apache.hadoop.examples.QuasiMonteCarlo.estimatePi(QuasiMonteCarlo.java:282)
    at org.apache.hadoop.examples.QuasiMonteCarlo.run(QuasiMonteCarlo.java:354)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
    at org.apache.hadoop.examples.QuasiMonteCarlo.main(QuasiMonteCarlo.java:363)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:72)
    at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:144)
    at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:74)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:212)

Source: https://stackoverflow.com/questions/21555041
Updated: 2022-08-25 12:08

Accepted answer

You didn't mention it in the question, but I guess you thought specifically of using a pair for defining Reader because it also makes sense to think of that as a way of providing a fixed environment. Let's say we have an earlier result in the Reader monad:

return 2 :: Reader Integer Integer

We can use this result to do further calculations with the fixed environment (and the Monad methods guarantee it remains fixed throughout the chain of (>>=)):

GHCi> runReader (return 2 >>= \x -> Reader (\r -> x + r)) 3
5

(If you substitute the definitions of return, (>>=) and runReader in the expression above and simplify it, you will see exactly how it reduces to 2 + 3.)
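
For reference, here is a minimal sketch of the Reader type this answer assumes throughout. (This is an assumption for illustration: the transformers library actually defines Reader as ReaderT r Identity and exports a lowercase reader function rather than a Reader constructor.)

newtype Reader r a = Reader { runReader :: r -> a }

instance Functor (Reader r) where
    fmap f (Reader g) = Reader (f . g)

instance Applicative (Reader r) where
    pure x = Reader (\_ -> x)                        -- ignore the environment
    Reader f <*> Reader g = Reader (\r -> f r (g r))

instance Monad (Reader r) where
    return = pure
    -- Pass the same environment r to both steps of the chain;
    -- this is what keeps the environment fixed across (>>=).
    m >>= f = Reader (\r -> runReader (f (runReader m r)) r)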

Now, let's follow your suggestion and define:

newtype Env r a = Env { runEnv :: (r, a) }

If we have an environment of type r and a previous result of type a, we can make an Env r a out of them...

Env (3, 2) :: Env Integer Integer

... and we can also get a new result from that:

GHCi> (\(r, x) -> x + r) . runEnv $ Env (3, 2)
5

The question, then, is whether we can capture this pattern through the Monad interface. The answer is no. While there is a Monad instance for pairs, it does something quite different:

newtype Writer r a = Writer { runWriter :: (r, a) }

instance Monoid r => Monad (Writer r) where
    return x = Writer (mempty, x)
    m >>= f = Writer
        . (\(r, x) -> (\(s, y) -> (mappend r s, y)) $ runWriter (f x))
        $ runWriter m

The Monoid constraint is needed so that we can use mempty (which solves the problem you noticed of having to create an r_unknown out of nowhere) and mappend (which makes it possible to combine the first elements of the pairs in a way that doesn't violate the monad laws). This Monad instance, however, does something very different from what the Reader one does. The first element of the pair isn't fixed (it is subject to change, as we mappend other generated values to it), and we don't use it to compute the second element of the pair (in the definition above, y depends neither on r nor on s). Writer is a logger; the r values here are output, not input.
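
A quick illustration of that difference, as a hypothetical GHCi session using the corrected Writer above: the first elements accumulate via mappend, and the second element never looks at them.

GHCi> runWriter (Writer ("two", 2) >>= \x -> Writer (" plus three", x + 3))
("two plus three",5)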


There is one way, however, in which your intuition is justified: we can't make a reader-like monad using a pair, but we can make a reader-like comonad. To put it very loosely, Comonad is what you get when you turn the Monad interface upside down:

-- This is slightly different than what you'll find in Control.Comonad,
-- but it boils down to the same thing.
class Comonad w where
    extract :: w a -> a                 -- compare with return
    (=>>) :: w a -> (w a -> b) -> w b   -- compare with (>>=)

We can give the Env we had abandoned a Comonad instance:

newtype Env r a = Env { runEnv :: (r, a) }

instance Comonad (Env r) where
    extract (Env (_, x)) = x
    w@(Env (r, _)) =>> f = Env (r, f w)

That allows us to write the 2 + 3 example from the beginning in terms of (=>>):

GHCi> runEnv $ Env (3, 2) =>> ((\(r, x) -> x + r) . runEnv) 
(3,5)
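
Note that (=>>) keeps the environment around in its result; extract, the counterpart of return, is what finally drops it (a hypothetical continuation of the same session):

GHCi> extract $ Env (3, 2) =>> ((\(r, x) -> x + r) . runEnv)
5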

One way to see why this works is noting that an a -> Reader r b function (i.e. what you give to Reader's (>>=)) is essentially the same thing as an Env r a -> b one (i.e. what you give to Env's (=>>)):

a -> Reader r b
a -> (r -> b)     -- Unwrap the Reader result
r -> (a -> b)     -- Flip the function
(r, a) -> b       -- Uncurry the function
Env r a -> b      -- Wrap the argument pair

As further evidence of that, here is a function that changes one into the other:

GHCi> :t \f -> \w -> (\(r, x) -> runReader (f x) r) $ runEnv w
\f -> \w -> (\(r, x) -> runReader (f x) r) $ runEnv w
  :: (t -> Reader r a) -> Env r t -> a
GHCi> -- Or, equivalently:
GHCi> :t \f -> uncurry (flip (runReader . f)) . runEnv
\f -> uncurry (flip (runReader . f)) . runEnv
  :: (a -> Reader r c) -> Env r a -> c
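
The conversion in the other direction works just as well; the name fromEnvArrow below is made up for this illustration:

fromEnvArrow :: (Env r a -> b) -> (a -> Reader r b)
fromEnvArrow g x = Reader (\r -> g (Env (r, x)))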

To wrap things up, here is a slightly longer example, with Reader and Env versions side-by-side:

GHCi> :{
GHCi| flip runReader 3 $
GHCi|     return 2 >>= \x ->
GHCi|     Reader (\r -> x ^ r) >>= \y ->
GHCi|     Reader (\r -> y - r)
GHCi| :}
5
GHCi> :{
GHCi| extract $
GHCi|     Env (3, 2) =>> (\w ->
GHCi|     (\(r, x) -> x ^ r) $ runEnv w) =>> (\z ->
GHCi|     (\(r, x) -> x - r) $ runEnv z)
GHCi| :}
5
