Eureka server and Eureka client on separate Tomcat servers
I'm looking for a way to deploy the Eureka server to a different Tomcat server than the Eureka client.
This is the client application.yml:
eureka:
  client:
    registryFetchIntervalSeconds: 5
  instance:
    preferIpAddress: true
    leaseRenewalIntervalInSeconds: 10
server:
  port: 8080
spring:
  application.name: my-client
  jmx:
    default-domain: my-client
and the server application.yml looks like:
server:
  port: 8761
eureka:
  client:
    registerWithEureka: false
    fetchRegistry: false
It works perfectly fine if I deploy them to the same Tomcat server. But if I start one Tomcat with the Eureka server only and later start another Tomcat with the client, I get the following error:
2017-03-09 16:17:58.496  INFO 7693 --- [on(2)-127.0.0.1] com.netflix.discovery.DiscoveryClient    : Registered Applications size is zero : true
2017-03-09 16:17:58.496  INFO 7693 --- [on(2)-127.0.0.1] com.netflix.discovery.DiscoveryClient    : Application version is -1: true
2017-03-09 16:17:58.496  INFO 7693 --- [on(2)-127.0.0.1] com.netflix.discovery.DiscoveryClient    : Getting all instance registry info from the eureka server
2017-03-09 16:18:04.740  WARN 7693 --- [on(2)-127.0.0.1] c.n.d.s.t.d.RetryableEurekaHttpClient    : Request execution failure with status code 404; retrying on another server if available
2017-03-09 16:18:04.745 ERROR 7693 --- [on(2)-127.0.0.1] com.netflix.discovery.DiscoveryClient    : DiscoveryClient_MYCLIENT-CLIENT/192.168.196.141:my-client:8080 - was unable to refresh its cache! status = Cannot execute request on any known server
com.netflix.discovery.shared.transport.TransportException: Cannot execute request on any known server
	at com.netflix.discovery.shared.transport.decorator.RetryableEurekaHttpClient.execute(RetryableEurekaHttpClient.java:111) ~[eureka-client-1.4.12.jar:1.4.12]
	at com.netflix.discovery.shared.transport.decorator.EurekaHttpClientDecorator.getApplications(EurekaHttpClientDecorator.java:134) ~[eureka-client-1.4.12.jar:1.4.12]
What is the difference between bootstrap.yml and application.yml?
Any help would be appreciated!
Original: https://stackoverflow.com/questions/42699081
Accepted answer
I believe the problem is that your id is a member of some class. So Spark tries to serialize the whole class for you. To prevent that, just assign your id field to some local value:

def doParse = {
  val localId = id
  sc.cassandraTable("keyspace", "table").where("some_restriction=random")
    .filter(x => (Json.parse(x.get[String]("content")) \ "id")
      .toString.contains(localId))
}
UPD: So please define the method above in the REPL as is, and then invoke it:
scala> doParse
This will limit the scope that Spark tries to serialise.
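For a bit more context, here is a minimal, self-contained sketch of the same idea outside the original snippet; the Parser class, its field names and the sample data are made up for illustration and are not from the question. Referencing a member field inside an RDD operation captures the enclosing instance (which may not be serializable), whereas copying the field to a local val first means only that value is shipped to the executors:

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.rdd.RDD

// Hypothetical class standing in for "some class" that owns the id field.
class Parser(val id: String, sc: SparkContext) {

  // Using `id` directly in the closure would capture `this` (and the
  // non-serializable SparkContext with it) and fail with
  // "Task not serializable":
  //   data.filter(_.contains(id)).count()

  // Copying the field to a local val first means only the String is
  // captured, so the task serializes fine.
  def countMatching(data: RDD[String]): Long = {
    val localId = id
    data.filter(_.contains(localId)).count()
  }
}

object LocalCopyDemo {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("local-copy-demo").setMaster("local[*]"))
    val parser = new Parser("42", sc)
    val lines = sc.parallelize(Seq("order 42", "order 7", "order 42b"))
    println(parser.countMatching(lines)) // prints 2
    sc.stop()
  }
}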
Related Q&A
- When you replace l with 1L, Spark no longer tries to serialize the class together with the method / variable, so no error is thrown. You should be able to fix it by marking val x: NonSerializableThing = ... as transient: @transient val x: NonSerializableThing = ... which means the variable should be ignored when the class is serialized (see the @transient sketch after this list). ...
- Serialization error caused by a SparkContext captured in a lambda. The serialization problem is caused by val addressRDD = sc.cassandraTable("local_keyspace", "employee_address") being used inside the serialized lambda: val id = data .map(s => (s,getID(s))) All RDD transformations represent code that is executed remotely, which means their entire contents must be serializable. The SparkContext is not serializable, yet it has to be available for "getIDs" to work, hence the exception. ...
- Task not serializable at Spark [2023-08-27]
  I have run into this problem many times with Java code. Since I use Java serialization, I would make the class that holds the code Serializable, or, if you don't want to do that, make the Function a static member of the class. Here is a code snippet of the solution. public class Test { private static Function s = new Function>() { @Override public Tuple2 call( ...
- Apparently, Rating cannot be Serializable, because it contains references to Spark structures (i.e. SparkSession, SparkConf, etc.) as attributes. The problem here lies in JavaRDD ratingsRD = spark.read().textFile("sample_movielens_ratings.txt") .javaRDD() .map(mapFunc); If you look at the definition of mapFunc, you will see that it returns a Rating object. mapF ...
- I managed to resolve this issue. The problem was that I introduced Typesafe config to one of the classes that were being used by the function that failed to be serialized. Adding the config increased the total memory footprint and exceeded the 64KB limit. When I removed the config object from the class, it worked fine again. ...
- Task not serializable: Json strings processing using Spark Streaming [2023-12-18]
  It looks like the normalize method is part of some class. On the line where you use it in a map operation, Spark has to serialize not only the method itself but the whole instance. The simplest solution is to move normalize into some singleton object (see the singleton-object sketch after this list): object JsonUtils { def normalize(json: String): String = ??? } and call it: val callRDD = JSONstrings.map(JsonUtils.normalize(_)) ...
- Task not serializable in scala [2021-11-28]
  You cannot treat an RDD like a local collection. All operations against it happen on the distributed cluster. For this to work, every function you run on that RDD must be serializable. The line for (print1 <- src) { iterates over the RDD src, and everything inside the loop must be serializable because it will run on the executors. However, inside that loop you try to run sc.parallelize, and SparkContext is not serializable. Working with RDDs and the SparkContext is something you do on the driver and cannot be done inside RDD operations. I'm not entirely sure what you are trying to accomplish, but ...
- Most likely the function "doSomething" is defined on your class, which is not serializable. Instead, move the "doSomething" function to a companion object (i.e. make it static). It was the dateFormatter, I placed it inside the partition loop and it works now. usersRDD.foreachPartition(part => { val id = userRow.id val dateFormatter = DateTimeF ...
- I believe the problem is that your id is a member of some class, so Spark tries to serialize the whole class for you. To prevent that, just assign your id field to some local value: def doParse = { val localId = id sc.cassandraTable("keyspace","table").where("some_restriction=random") .filter(x=> (Json.parse(x.get[String]("content"))\"id") .toString.contain ...
- Twitter Spark Stream Filtering: Task not serializable exception [2022-11-16]
  The Spark-shell encapsulates the code in anonymous classes in order to serialize it and send it to the workers. It is sometimes tricky to know what gets captured and in which scope. If you copy/paste your code into the spark-shell, even the order and number of lines you :paste together can produce a different class structure. The rule for avoiding serialization problems is to mark all such elements as @transient. In this particular case, I would add the transient annotation to conf, auth and tweets. ...
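A couple of the snippets above (the 1L/@transient one and the Twitter streaming one) recommend marking non-serializable fields as @transient. As a hedged illustration of that idea only: the ExpensiveConfig and Holder classes below are invented for this sketch and do not appear in any of the quoted answers. @transient tells Java serialization to skip the field, so the enclosing instance can still be shipped inside a task closure:

import org.apache.spark.{SparkConf, SparkContext}

// Hypothetical non-serializable helper, standing in for a config/auth object.
class ExpensiveConfig(val threshold: Int)

class Holder(@transient val config: ExpensiveConfig) extends Serializable {
  // Copy the plain value needed on the executors; the @transient config
  // itself is skipped when the Holder instance is serialized.
  private val threshold: Int = config.threshold

  def countAbove(sc: SparkContext, values: Seq[Int]): Long =
    sc.parallelize(values).filter(_ > threshold).count()
}

object TransientDemo {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("transient-demo").setMaster("local[*]"))
    val holder = new Holder(new ExpensiveConfig(10))
    println(holder.countAbove(sc, Seq(3, 15, 42))) // prints 2
    sc.stop()
  }
}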
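Similarly, the "move it into a singleton object" advice from the JSON-normalization and doSomething snippets can be sketched as follows; JsonUtils is the name used in the quoted answer, but its trivial body and the surrounding driver code here are invented for illustration. Methods on an object behave like static methods, so calling them inside an RDD operation does not drag any enclosing instance into the closure:

import org.apache.spark.{SparkConf, SparkContext}

// Singleton object: no instance has to be serialized to call its methods
// from inside an RDD operation.
object JsonUtils {
  // Stand-in body; the real normalization logic is not shown in the snippet above.
  def normalize(json: String): String = json.trim
}

object SingletonDemo {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("singleton-demo").setMaster("local[*]"))
    val jsonStrings = sc.parallelize(Seq("""  {"a": 1}  """, """{"b": 2}"""))
    val normalized = jsonStrings.map(JsonUtils.normalize)
    normalized.collect().foreach(println)
    sc.stop()
  }
}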