Rails 5 - ruby 2.2.3 // how has ActiveSupport::TimeZone changed?
I asked a question yesterday (here) about time zones; the following does not work with Rails 5 running on Ruby 2.2.3:

```ruby
ActiveSupport::TimeZone.zones_map
```

Where can one read about the differences in usage with this new version? What would be the way to achieve the same result?
Original: https://stackoverflow.com/questions/36172335

Accepted answer
You could use the `set` approach from "How do you remove duplicates from a list whilst preserving order?", using `x[1]` as the unique identifier:

```python
def unique_second_element(seq):
    seen = set()
    seen_add = seen.add
    return [x for x in seq if not (x[1] in seen or seen_add(x[1]))]
```
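For example, a quick check on a small sample list (the tuples below are made up for illustration; only the first tuple per second element survives):

```python
def unique_second_element(seq):
    # As defined above: keep the first tuple seen for each x[1].
    seen = set()
    seen_add = seen.add
    return [x for x in seq if not (x[1] in seen or seen_add(x[1]))]

pairs = [(1, 'a'), (2, 'b'), (3, 'a')]
print(unique_second_element(pairs))  # [(1, 'a'), (2, 'b')]
```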
Note that the `OrderedDict` approach also shown there would work if you wanted to preserve the last occurrence; for the first occurrence you'd have to reverse the input, then reverse the output again.

You could make this more generic by supporting a `key` function:

```python
def unique_preserve_order(seq, key=None):
    if key is None:
        key = lambda elem: elem
    seen = set()
    seen_add = seen.add
    augmented = ((key(x), x) for x in seq)
    return [x for k, x in augmented if not (k in seen or seen_add(k))]
```
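The `OrderedDict` variant mentioned above could be sketched as follows (the `unique_keep_last` name and sample data are mine, not from the original answer). Later duplicates overwrite the stored value, so the last occurrence's tuple wins, while the key keeps its first-seen position:

```python
from collections import OrderedDict
import operator

def unique_keep_last(seq, key):
    # Each assignment to an existing key replaces the value but keeps
    # the key's original insertion position in the OrderedDict.
    return list(OrderedDict((key(x), x) for x in seq).values())

data = [(1, 'a'), (2, 'b'), (3, 'a')]
print(unique_keep_last(data, operator.itemgetter(1)))  # [(3, 'a'), (2, 'b')]
```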
then use

```python
import operator

unique_preserve_order(yourlist, key=operator.itemgetter(1))
```
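On Python 3.7+, where plain dicts preserve insertion order, a `dict.setdefault`-based alternative is also possible. This is a sketch, not part of the original answer (`unique_first_dict` is a hypothetical name):

```python
import operator

def unique_first_dict(seq, key=None):
    # Plain dicts preserve insertion order in Python 3.7+.
    if key is None:
        key = lambda elem: elem
    out = {}
    for x in seq:
        out.setdefault(key(x), x)  # only the first value per key is kept
    return list(out.values())

print(unique_first_dict([(1, 'a'), (2, 'b'), (3, 'a')], operator.itemgetter(1)))
# [(1, 'a'), (2, 'b')]
```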
Demo:

```python
>>> def unique_preserve_order(seq, key=None):
...     if key is None:
...         key = lambda elem: elem
...     seen = set()
...     seen_add = seen.add
...     augmented = ((key(x), x) for x in seq)
...     return [x for k, x in augmented if not (k in seen or seen_add(k))]
...
>>> from pprint import pprint
>>> import operator
>>> yourlist = [
...     (67, u'top-coldestcitiesinamerica'),
...     (66, u'ecofriendlyideastocelebrateindependenceday-phpapp'),
...     (65, u'a-b-c-ca-d-ab-ea-d-c-c'),
...     (64, u'a-b-c-ca-d-ab-ea-d-c-c'),
...     (63, u'alexandre-meybeck-faowhatisclimate-smartagriculture-backgroundopportunitiesandchallenges'),
...     (62, u'ghgemissions'),
...     (61, u'top-coldestcitiesinamerica'),
...     (58, u'infographicthe-stateofdigitaltransformationaltimetergroup'),
...     (57, u'culture'),
...     (55, u'cas-k-ihaveanidea'),
...     (54, u'trendsfor'),
...     (53, u'batteryimpedance'),
...     (52, u'evs-howey-full'),
...     (51, u'bericht'),
...     (49, u'classiccarinsurance'),
...     (47, u'uploaded_file'),
...     (46, u'x_file'),
...     (45, u's-s-main'),
...     (44, u'vehicle-propulsion'),
...     (43, u'x_file')]
>>> pprint(unique_preserve_order(yourlist, operator.itemgetter(1)))
[(67, u'top-coldestcitiesinamerica'),
 (66, u'ecofriendlyideastocelebrateindependenceday-phpapp'),
 (65, u'a-b-c-ca-d-ab-ea-d-c-c'),
 (63, u'alexandre-meybeck-faowhatisclimate-smartagriculture-backgroundopportunitiesandchallenges'),
 (62, u'ghgemissions'),
 (58, u'infographicthe-stateofdigitaltransformationaltimetergroup'),
 (57, u'culture'),
 (55, u'cas-k-ihaveanidea'),
 (54, u'trendsfor'),
 (53, u'batteryimpedance'),
 (52, u'evs-howey-full'),
 (51, u'bericht'),
 (49, u'classiccarinsurance'),
 (47, u'uploaded_file'),
 (46, u'x_file'),
 (45, u's-s-main'),
 (44, u'vehicle-propulsion')]
```