Eureka server health check fails in AWS Application Load Balancer
I have configured a Eureka server in an ECS cluster and used an Application Load Balancer for its service. Eureka is configured to require authentication via the property file, as below.
security.user.name=xxxxx
security.user.password=yyyy
In the load balancer I created a target group for Eureka on port 8761 and set '/' as the health check URL. But the load balancer's health check fails with the following error.
Health checks failed with these codes: [401]
This indicates that the ALB fails the health check because of the authentication. (Removing the authentication works, but it causes some other errors.) Is there a way to pass the health check in the ALB?
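Two approaches commonly address this; the following is a minimal sketch, assuming this is a Spring Boot 1.x application (the `security.user.*` property style in the question) with the Actuator on the classpath. Either leave the health endpoint unauthenticated and point the target group's health check at it, or keep '/' and configure the target group's health check "Success codes" matcher to also accept 401 (ALB health checks accept HTTP code values such as `200,401`).

```properties
# Sketch, assuming Spring Boot 1.x properties as in the question.
# Keep basic auth on Eureka itself ...
security.user.name=xxxxx
security.user.password=yyyy
# ... but serve the Actuator endpoints (including /health) without
# credentials, then point the ALB target group health check path at
# /health instead of /.
management.security.enabled=false
```

Disabling `management.security.enabled` opens all sensitive Actuator endpoints, so if that is too broad, the ALB-side success-codes change is the less invasive option.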
Accepted answer
This is the approach I mentioned in the comment: it uses a file object to skip the custom dirty data you need to skip at the beginning. You land the file offset at the appropriate location in the file, where read_fwf simply does the job:

    with open(rawfile, 'r') as data_file:
        while data_file.read(1) == '#':
            last_pound_pos = data_file.tell()
            data_file.readline()
        data_file.seek(last_pound_pos)
        df = pd.read_fwf(data_file)

    df
    Out[88]:
       i      mult  stat (+/-)  syst (+/-)        Q2         x       x.1       Php
    0  0  0.322541    0.018731    0.026681  1.250269  0.037525  0.148981  0.104192
    1  1  0.667686    0.023593    0.033163  1.250269  0.037525  0.150414  0.211203
    2  2  0.766044    0.022712    0.037836  1.250269  0.037525  0.149641  0.316589
    3  3  0.668402    0.024219    0.031938  1.250269  0.037525  0.148027  0.415451
    4  4  0.423496    0.020548    0.018001  1.250269  0.037525  0.154227  0.557743
    5  5  0.237175    0.023561    0.007481  1.250269  0.037525  0.159904  0.750544
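The seek trick above can be exercised end-to-end with a small in-memory file. The sample data below is hypothetical (not the original `rawfile`); it only reproduces the pattern the answer relies on: comment lines start with `#`, and the last `#` line carries the column names, so seeking to just past that last `#` lets `read_fwf` treat it as the header row.

```python
import io

import pandas as pd

# Hypothetical fixed-width sample: '#' comment lines, where the last
# one holds the column names (same layout the answer assumes).
raw = (
    "# dirty preamble that read_fwf must not see\n"
    "# more preamble\n"
    "#    i      mult\n"
    "     0  0.322541\n"
    "     1  0.667686\n"
)

data_file = io.StringIO(raw)
last_pound_pos = 0  # fallback in case the file has no comment lines
while data_file.read(1) == '#':
    last_pound_pos = data_file.tell()  # offset just past this '#'
    data_file.readline()               # skip the rest of the comment line
data_file.seek(last_pound_pos)         # rewind to the header row, minus its '#'
df = pd.read_fwf(data_file)

print(list(df.columns))  # column names recovered from the last '#' line
print(len(df))           # two data rows
```

Note that the loop reads one character past the last comment line before exiting; the `seek` repairs that by jumping back to the saved offset, which is also why the header loses its leading `#`.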