Gluon Mobile 2.2.0: How to change ProgressBar/ProgressIndicator color?
The default color of
com.gluonhq.charm.glisten.control.ProgressIndicator/ProgressBar
is Material Design blue. How can the color be changed?
I tried different JavaFX CSS properties, but none of them worked.
Original: https://stackoverflow.com/questions/36953858
Best answer
UPDATE: assuming that Date is an index (not a regular column).

Source dictionary:

In [70]: d2
Out[70]:
{'CCC':                  Open       High        Low      Close  Volume  Adj Close
 Date
 2015-12-31  17.270000  17.389999  17.120001  17.250000  177200  16.965361
 2016-01-04  17.000000  17.219999  16.600000  17.180000  371600  16.896516
 2016-01-05  17.190001  17.530001  17.059999  17.450001  417500  17.162061,
 'CLSN':     Open  High   Low  Close  Volume  Adj Close
 Date
 2015-12-31  1.92  1.99  1.87   1.92   79600       1.92
 2016-01-04  1.93  1.99  1.87   1.93   39700       1.93
 2016-01-05  1.89  1.94  1.85   1.90   50200       1.90}

Solution:

In [73]: pd.Panel(d2).swapaxes(0, 2).to_frame().reset_index(level=0).sort_index()
Out[73]:
             Date       Open       High        Low      Close    Volume  Adj Close
minor
CCC    2015-12-31  17.270000  17.389999  17.120001  17.250000  177200.0  16.965361
CCC    2016-01-04  17.000000  17.219999  16.600000  17.180000  371600.0  16.896516
CCC    2016-01-05  17.190001  17.530001  17.059999  17.450001  417500.0  17.162061
CLSN   2015-12-31   1.920000   1.990000   1.870000   1.920000   79600.0   1.920000
CLSN   2016-01-04   1.930000   1.990000   1.870000   1.930000   39700.0   1.930000
CLSN   2016-01-05   1.890000   1.940000   1.850000   1.900000   50200.0   1.900000

Alternatively, you can leave Date as part of your index:

In [74]: pd.Panel(d2).swapaxes(0, 2).to_frame().sort_index()
Out[74]:
                        Open       High        Low      Close    Volume  Adj Close
Date       minor
2015-12-31 CCC     17.270000  17.389999  17.120001  17.250000  177200.0  16.965361
           CLSN     1.920000   1.990000   1.870000   1.920000   79600.0   1.920000
2016-01-04 CCC     17.000000  17.219999  16.600000  17.180000  371600.0  16.896516
           CLSN     1.930000   1.990000   1.870000   1.930000   39700.0   1.930000
2016-01-05 CCC     17.190001  17.530001  17.059999  17.450001  417500.0  17.162061
           CLSN     1.890000   1.940000   1.850000   1.900000   50200.0   1.900000

OLD answer (it assumes that Date is a regular column, not an index). Try this:

In [59]: pd.Panel(d).swapaxes(0, 2).to_frame().reset_index('major', drop=True).sort_index()
Out[59]:
             Date   Open   High    Low  Close  Volume Adj Close
minor
CCC    2015-12-31  17.27  17.39  17.12  17.25  177200   16.9654
CCC    2016-01-04     17  17.22   16.6  17.18  371600   16.8965
CCC    2016-01-05  17.19  17.53  17.06  17.45  417500   17.1621
CLSN   2015-12-31   1.92   1.99   1.87   1.92   79600      1.92
CLSN   2016-01-04   1.93   1.99   1.87   1.93   39700      1.93
CLSN   2016-01-05   1.89   1.94   1.85    1.9   50200       1.9

where d is your nested dictionary:

In [60]: d
Out[60]:
{'CCC':         Date       Open       High        Low      Close  Volume  Adj Close
 0  2015-12-31  17.270000  17.389999  17.120001  17.250000  177200  16.965361
 1  2016-01-04  17.000000  17.219999  16.600000  17.180000  371600  16.896516
 2  2016-01-05  17.190001  17.530001  17.059999  17.450001  417500  17.162061,
 'CLSN':         Date  Open  High   Low  Close  Volume  Adj Close
 0  2015-12-31  1.92  1.99  1.87   1.92   79600       1.92
 1  2016-01-04  1.93  1.99  1.87   1.93   39700       1.93
 2  2016-01-05  1.89  1.94  1.85   1.90   50200       1.90}
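Note that pd.Panel was deprecated in pandas 0.20 and removed in pandas 1.0, so the answer above only runs on old pandas versions. On modern pandas, the same reshaping can be sketched with pd.concat; the d2 dictionary below is a trimmed stand-in for the one in the answer, and the level name 'minor' is chosen only to mirror the Panel output.

```python
import pandas as pd

# Trimmed stand-in for the d2 dictionary from the answer: one DataFrame
# per ticker, each indexed by Date.
d2 = {
    'CCC': pd.DataFrame(
        {'Open': [17.27, 17.00], 'Close': [17.25, 17.18]},
        index=pd.Index(['2015-12-31', '2016-01-04'], name='Date')),
    'CLSN': pd.DataFrame(
        {'Open': [1.92, 1.93], 'Close': [1.92, 1.93]},
        index=pd.Index(['2015-12-31', '2016-01-04'], name='Date')),
}

# Stack the per-ticker frames, naming the new outer level 'minor' to match
# the Panel output, then put Date first and sort -- this reproduces the
# shape of pd.Panel(d2).swapaxes(0, 2).to_frame().sort_index() from In [74].
df = pd.concat(d2, names=['minor']).swaplevel('minor', 'Date').sort_index()
print(df)
```

Calling `df.reset_index(level=0)` on this result then gives the Out[73] variant, with Date as a regular column and the tickers as the index.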
Related Q&A
-
Just to have an answer here (it was already given in the comments), here it is (again): import pandas as pd A = pd.DataFrame([[1, 5, 2, 8, 2], [2, 4, 4, 20, 2], [3, 3, 1, 20, 2], [4, 2, 2, 1, 0], [5, 1, 4, -5, -4], [1, 5, 2, 2, -20], [2, 4, 4, 3, 0], [3, 3, 1, -1, -1], [4, 2, 2, 0, 0 ...
-
I think you are very close. Use groupby and to_dict: df = df.groupby('Name')[['Chain','Food','Healthy']] .apply(lambda x: x.set_index('Chain').to_dict(orient='index')) .to_dict() print (df) {'George': {'KFC': {'Healthy': False, 'Food': 'chicken'}, 'McD ...
-
You need to loop over or recurse into the nested dictionary, through all of its levels. Unless it can be very deep (like hundreds of levels), or so wide that small performance factors matter, recursion is probably simplest: def defaultify(d): if not isinstance(d, dict): return d return defaultdict(lambda: None, {k: defaultify(v) for k, v in d.items()}) Or, if you want it to work with all mappings, not just dicts, you can use isin ...
-
This works for me: d = {l: df.xs(l)['clump_thickness'].to_dict() for l in df.index.levels[0]} Another similar solution for a DataFrame with a MultiIndex converted to a dict, but a Series is needed to filter the column: d = df.groupby(level=0).apply(lambda df: df.xs(df.name).clump_thickness.to_dict()).to_dict() print (d) {0: {0: 274.0, ...
-
UPDATE: assuming that Date is an index (not a regular column): Source dictionary: In [70]: d2 Out[70]: {'CCC': Open High Low Close Volume Adj Close Date 2015-12-31 17.270000 17.389999 17.120001 17.250000 177200 16.965361 2016-01-04 17.000000 17.219999 16.600000 ...
-
Consider using a dictionary comprehension to build a dictionary with tuple keys. Then use pandas' MultiIndex.from_tuples. ast is used below to rebuild the original dictionary from a string (ignoring your trailing part). import pandas as pd import ast origDict = ast.literal_eval(""" {'l1':{'c1': {'a': 0, 'b': 1, 'c': 2}, 'c2': {'a': 3, 'b': 4, 'c': 5}}, 'l2':{'c1': {'a': 0, 'b': 1 ...
-
A Spark DataFrame can be converted to a pandas DataFrame with the toPandas method.
-
We convert each nested list value into a DataFrame, then call pd.concat: columns = ['userid', 'score', 'related'] df_dict = {k : pd.DataFrame(v, columns=columns) for k, v in dict1.items()} df = (pd.concat(df_dict) .reset_index(level=1, drop=True) .rename_axis('memberid' ...
-
How to convert a dictionary with multiple values into a pandas dataframe? [2022-10-16]
Try .from_dict(): import pandas as pd e = {} for i in range(0, len("length_of_the_row")): e[i] = "a", "b", "c", "d" df = pd.DataFrame.from_dict(e, orient='index') ...
-
This should do it: {k: [[i.strftime('%d %b'), v] for i, v in s.iteritems()] for k, s in df.set_index('created').iteritems()} {'moans': [['16 Dec', 0], ['17 Dec', 0], ['18 Dec', 0], ['19 Dec', 0], ['20 Dec', 6], ['21 Dec', 0], ['22 Dec', 0]], 'thanks': ...
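The recursive defaultdict snippet a few items above is truncated by the teaser; a self-contained sketch of the same idea (the function name defaultify comes from that snippet, and returning None for missing keys is its stated behavior) could look like this:

```python
from collections import defaultdict

def defaultify(d):
    """Recursively wrap every dict level in a defaultdict that
    returns None for any missing key instead of raising KeyError."""
    if not isinstance(d, dict):
        return d  # leaves (non-dict values) are returned unchanged
    return defaultdict(lambda: None,
                       {k: defaultify(v) for k, v in d.items()})

nested = {'a': {'b': {'c': 1}}}
safe = defaultify(nested)
print(safe['a']['b']['c'])   # 1
print(safe['a']['missing'])  # None
```

Because every level is wrapped on the way down, lookups like `safe['a']['missing']` return None at any depth rather than raising, which is what the original answer was after.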