HDFS put a local file to hdfs but got UnresolvedAddressException
I want to put a 70G file into HDFS, so I used the 'put' command to do this. However, I got the following exception. I tried a smaller file with the same command and it worked. Does anyone know what the problem could be? Thanks!

    WARN [DataStreamer for file /user/qzhao/data/sorted/WGC033800D_sorted.bam._COPYING_] hdfs.DFSClient (DFSOutputStream.java:run(628)) - DataStreamer Exception
    java.nio.channels.UnresolvedAddressException
        at sun.nio.ch.Net.checkAddress(Net.java:127)
        at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:644)
        at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
        at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:529)
        at org.apache.hadoop.hdfs.DFSOutputStream.createSocketForPipeline(DFSOutputStream.java:1526)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1328)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1281)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:526)
    put: java.nio.channels.ClosedChannelException
        at org.apache.hadoop.hdfs.DFSOutputStream.checkClosed(DFSOutputStream.java:1538)
        at org.apache.hadoop.fs.FSOutputSummer.write(FSOutputSummer.java:98)
        at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:58)
        at java.io.DataOutputStream.write(DataOutputStream.java:107)
        at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:80)
        at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:52)
        at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:112)
        at org.apache.hadoop.fs.shell.CommandWithDestination$TargetFileSystem.writeStreamToFile(CommandWithDestination.java:395)
        at org.apache.hadoop.fs.shell.CommandWithDestination.copyStreamToTarget(CommandWithDestination.java:327)
        at org.apache.hadoop.fs.shell.CommandWithDestination.copyFileToTarget(CommandWithDestination.java:303)
        at org.apache.hadoop.fs.shell.CommandWithDestination.processPath(CommandWithDestination.java:243)
        at org.apache.hadoop.fs.shell.CommandWithDestination.processPath(CommandWithDestination.java:228)
        at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:306)
        at org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:278)
        at org.apache.hadoop.fs.shell.CommandWithDestination.processPathArgument(CommandWithDestination.java:223)
        at org.apache.hadoop.fs.shell.Command.processArgument(Command.java:260)
        at org.apache.hadoop.fs.shell.Command.processArguments(Command.java:244)
        at org.apache.hadoop.fs.shell.CommandWithDestination.processArguments(CommandWithDestination.java:200)
        at org.apache.hadoop.fs.shell.CopyCommands$Put.processArguments(CopyCommands.java:259)
        at org.apache.hadoop.fs.shell.Command.processRawArguments(Command.java:190)
        at org.apache.hadoop.fs.shell.Command.run(Command.java:154)
        at org.apache.hadoop.fs.FsShell.run(FsShell.java:287)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
        at org.apache.hadoop.fs.FsShell.main(FsShell.java:340)
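The trace itself points at name resolution rather than file size as such: java.nio.channels.UnresolvedAddressException is thrown when a socket connect is attempted against a hostname the client cannot resolve, and here it happens inside DFSOutputStream.createSocketForPipeline, i.e. while connecting to a DataNode of the write pipeline. A small file may succeed simply because its blocks land on DataNodes whose hostnames the client can resolve, while a 70G file eventually gets a block assigned to one it cannot. A minimal sketch of the underlying JDK behavior (the hostname and port below are hypothetical, chosen only to force an unresolved address):

    import java.net.InetSocketAddress;
    import java.nio.channels.SocketChannel;
    import java.nio.channels.UnresolvedAddressException;

    public class UnresolvedAddressDemo {
        public static void main(String[] args) throws Exception {
            // A hostname that neither DNS nor /etc/hosts can resolve yields an
            // InetSocketAddress in the "unresolved" state instead of an error.
            InetSocketAddress addr =
                    new InetSocketAddress("no-such-datanode.example", 50010);
            System.out.println("unresolved? " + addr.isUnresolved()); // true

            // SocketChannel.connect() rejects an unresolved address with the
            // same UnresolvedAddressException the DataStreamer logs above.
            try (SocketChannel ch = SocketChannel.open()) {
                ch.connect(addr);
            } catch (UnresolvedAddressException e) {
                System.out.println("connect failed: " + e);
            }
        }
    }

If this reading is right, the usual first check is whether every DataNode hostname reported by the NameNode resolves from the client machine (via DNS or /etc/hosts).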
Source: https://stackoverflow.com/questions/36926453
Updated: 2023-02-14 09:02
Accepted answer
Use @JoinColumn instead of @Column:

    @ManyToOne
    @JoinColumn(name = "LicenseeFK")
    private Licensee licensee;
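For context, a minimal sketch of how the corrected mapping sits in a full entity pair; everything except @ManyToOne, @JoinColumn(name = "LicenseeFK") and the Licensee type is illustrative (the License class name is assumed):

    import javax.persistence.Entity;
    import javax.persistence.GeneratedValue;
    import javax.persistence.Id;
    import javax.persistence.JoinColumn;
    import javax.persistence.ManyToOne;

    @Entity
    class Licensee {
        @Id @GeneratedValue
        Long id;
    }

    @Entity
    class License {
        @Id @GeneratedValue
        Long id;

        // @JoinColumn names the foreign-key column of an association;
        // @Column only applies to basic attributes, which is why pairing
        // it with @ManyToOne fails.
        @ManyToOne
        @JoinColumn(name = "LicenseeFK")
        Licensee licensee;
    }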
Related Q&A
- Your entity field is the inverse side of the mapping, so do not declare it with mappedBy; use this instead: /** * Inversed side * @var int|null * @ORM\ManyToOne(targetEntity="BaseValue", inversedBy="field") * @ORM\JoinColumn(name="[your_name]", referencedColumnName="[id]", onDelete="CASCADE") */ ...
- Try to remove the stock from the filter definition: @Filter(name="stockDailyRecordFilter", condition="name = 'My stock'"). Update: since stock here is the related table rather than the main one, you can try a subselect with IN (see the @Filter sketch after this list): condition="stock_id in (select id from stock where name = 'My stock')" ...
- Use @JoinColumn instead of @Column: @ManyToOne @JoinColumn(name="LicenseeFK") private Licensee licensee;
- Use @JoinColumn(updatable = false) instead of @Column(updatable = false).
- JPA Persist ManyToOne [2022-12-31]
  Well, you can simplify your code with something like this: @Transactional public void save(User user, String name) { Hometown hometown = getEntityManager().createQuery("SELECT h FROM Hometown h WHERE h.name = :name", Hometown.class).setParameter("name", name).getSingleResult(); if (home ...
- @Embeddable with @ManyToOne [2023-10-11]
  You cannot rename the foreign-key column with @AttributeOverride; you have to use @AssociationOverride: @Entity @Table(name = "TEST") public class B { public long id; @AssociationOverride(name = "classB", joinColumns = @JoinColumn(name = "EMBEDDED1_ID")) @AttributeOverrides({ ...
- Hibernate criteria for OneToMany/ManyToOne relationship [2022-11-19]
  You have added a space in "company.companyName " in the third criterion. Is that a typo? If not, that is the cause of the problem. I found the source of the problem, it is a stupid error on my part: in my original code I put compny.companyName, I forgot an "a" in company, it works very well.
- Thank you for your answers. I solved this issue by adding these properties: spring.jpa.hibernate.ddl-auto=update spring.jpa.generate-ddl=true. The problem was that spring-data could not update the schema. ...
- ManyToOne and OneToMany [2024-01-11]
  1) Add a City city field in the City class and put the @ManyToOne and @JoinColumn annotations on it? So we would have two tables, country and city, and the city table would have a country_id column. I think you mean adding a Country country field in the City class; yes, that is correct if a unidirectional relationship is your goal, but the joinColumn should not be in the owning entity, it should be in the non-owning entity, so you would go to the Country class, add a list of cities there, and annotate them with @OneToMany and a join column ... (see the Country/City sketch after this list)
- Property with @ManyToOne relation: How to make a relative counter? [2022-02-14]
  I'd probably go with an EventSubscriber on this. Subscribe to the preUpdate event and check for the conditions you mentioned.
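For the @Filter item above, a minimal sketch of the suggested subselect condition on a Hibernate-mapped pair of entities; the names stockDailyRecordFilter, stock, and stock_id follow the snippet, while the rest (stock_daily_record table, field names) is assumed:

    import org.hibernate.annotations.Filter;
    import org.hibernate.annotations.FilterDef;

    import javax.persistence.Entity;
    import javax.persistence.Id;
    import javax.persistence.JoinColumn;
    import javax.persistence.ManyToOne;
    import javax.persistence.Table;

    @Entity
    @Table(name = "stock")
    class Stock {
        @Id
        Long id;
        String name;
    }

    // The filter condition is injected into SQL against THIS entity's table,
    // so a column of the related stock table cannot be referenced directly;
    // the IN subselect reaches it through the stock_id foreign key instead.
    @Entity
    @Table(name = "stock_daily_record")
    @FilterDef(name = "stockDailyRecordFilter")
    @Filter(name = "stockDailyRecordFilter",
            condition = "stock_id in (select s.id from stock s where s.name = 'My stock')")
    class StockDailyRecord {
        @Id
        Long id;

        @ManyToOne
        @JoinColumn(name = "stock_id")
        Stock stock;
    }

A filter defined this way is still off by default; it has to be enabled per session, e.g. session.enableFilter("stockDailyRecordFilter").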
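And for the ManyToOne and OneToMany item, whose answer is cut off above, a sketch of the conventional bidirectional Country/City mapping in JPA: the @ManyToOne side owns the relationship and carries @JoinColumn, while the @OneToMany side refers back to it via mappedBy. The Country, City, and country_id names come from the item; the rest is illustrative:

    import java.util.ArrayList;
    import java.util.List;
    import javax.persistence.Entity;
    import javax.persistence.GeneratedValue;
    import javax.persistence.Id;
    import javax.persistence.JoinColumn;
    import javax.persistence.ManyToOne;
    import javax.persistence.OneToMany;

    @Entity
    class Country {
        @Id @GeneratedValue
        Long id;

        // Inverse side: mappedBy points at the owning field in City,
        // so no extra column or join table is created for Country.
        @OneToMany(mappedBy = "country")
        List<City> cities = new ArrayList<>();
    }

    @Entity
    class City {
        @Id @GeneratedValue
        Long id;

        // Owning side: this is what produces the country_id foreign-key
        // column in the city table.
        @ManyToOne
        @JoinColumn(name = "country_id")
        Country country;
    }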